Conversation
…ay health check

The cluster health check now verifies that /opt/openshell/bin/openshell-sandbox exists in the gateway container. Without this binary, every sandbox pod crashes with 'no such file or directory', but the gateway previously reported healthy.

- Add HEALTHCHECK_MISSING_SUPERVISOR marker to cluster-healthcheck.sh
- Add early detection in the bootstrap polling loop (runtime.rs)
- Add structured error diagnosis with recovery steps (errors.rs)
- Update the debug-openshell-cluster skill with the new failure pattern
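The check itself lives in cluster-healthcheck.sh (roughly `[ -x /opt/openshell/bin/openshell-sandbox ]`); a minimal Rust sketch of the same condition, with the function name invented for illustration, looks like:

```rust
use std::os::unix::fs::PermissionsExt;
use std::path::Path;

// Mirrors the shell health check: the supervisor binary must exist as a
// regular file and carry at least one execute bit; otherwise the gateway
// should report HEALTHCHECK_MISSING_SUPERVISOR instead of "healthy".
fn supervisor_present(path: &Path) -> bool {
    match std::fs::metadata(path) {
        Ok(meta) => meta.is_file() && meta.permissions().mode() & 0o111 != 0,
        Err(_) => false, // missing path, unreadable metadata, etc. all count as absent
    }
}
```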
Large sandbox images (e.g. 852MB base image) can take ~3 minutes to pull, exceeding the previous 120s timeout. Bump to 300s across all surfaces: CLI watch loop, server orphan grace period, TUI ready poll, Python SDK wait_ready default, and E2E test harness.
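One way to keep those five surfaces in sync is a single shared constant; a sketch (the constant name is an assumption, not from the PR):

```rust
use std::time::Duration;

// 300s pull deadline shared by the CLI watch loop, server orphan grace
// period, TUI ready poll, SDK wait_ready default, and the E2E harness.
// An 852MB base image observed to take ~3 minutes to pull already blows
// past the old 120s deadline, so 300s leaves some headroom.
pub const SANDBOX_READY_TIMEOUT: Duration = Duration::from_secs(300);
```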
… network

Teardown of existing gateway resources (container, volume, image) is now performed inside deploy_gateway_with_logs(), so both 'gateway start' and the auto-bootstrap path in 'sandbox create' get identical cleanup.

Remove the dedicated 'openshell-cluster' bridge network: it was only used by a single container, and the default Docker bridge is sufficient. This eliminates the network create/remove retry loops and stale endpoint cleanup that added complexity without benefit.

Fix the provisioning stream timeout in run.rs: wrap stream.next() in tokio::time::timeout() so the 300s deadline fires even when the gRPC stream stops producing events.
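The actual fix wraps the gRPC stream's `next()` in `tokio::time::timeout()`; the same shape can be sketched with a std-only channel standing in for the stream (function name and error text are illustrative):

```rust
use std::sync::mpsc::{Receiver, RecvTimeoutError};
use std::time::Duration;

// Bound each wait for the next provisioning event so the deadline fires
// even when the producer goes silent. (The bug: awaiting an idle stream
// never returned, so the 300s deadline was never checked.)
fn next_event_with_deadline(
    rx: &Receiver<String>,
    deadline: Duration,
) -> Result<Option<String>, String> {
    match rx.recv_timeout(deadline) {
        Ok(event) => Ok(Some(event)),
        // Sender dropped: the stream ended normally.
        Err(RecvTimeoutError::Disconnected) => Ok(None),
        // No event before the deadline: surface a timeout error.
        Err(RecvTimeoutError::Timeout) => {
            Err("provisioning stream produced no events before the deadline".to_string())
        }
    }
}
```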
… gateway

Instead of unconditionally tearing down existing gateway resources on every `gateway start`, restore the interactive prompt that asks the user whether to destroy and recreate. When --recreate is passed, the prompt is skipped and resources are destroyed directly. In non-interactive mode, the existing gateway is reused silently.

The deploy_gateway_with_logs function now respects a `recreate` field on DeployOptions, only destroying Docker resources (container, image, volume) when explicitly requested. The auto-bootstrap path sets recreate=true to handle stale Docker resources without metadata.
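The resulting decision logic can be sketched as follows (types and names other than `DeployOptions` and `recreate` are assumptions for illustration):

```rust
#[derive(Debug, PartialEq)]
enum ExistingGateway {
    Destroy, // tear down container, image, volume, then redeploy
    Reuse,   // keep the running gateway as-is
}

struct DeployOptions {
    recreate: bool, // set by --recreate, and by the auto-bootstrap path
}

// Decide what to do when a gateway already exists.
fn plan_existing_gateway(
    opts: &DeployOptions,
    interactive: bool,
    confirm_destroy: impl Fn() -> bool, // the interactive prompt
) -> ExistingGateway {
    if opts.recreate {
        // --recreate (or auto-bootstrap): destroy without asking.
        ExistingGateway::Destroy
    } else if interactive && confirm_destroy() {
        // Prompt the user; destroy only on an explicit yes.
        ExistingGateway::Destroy
    } else {
        // Non-interactive (or the user declined): reuse silently.
        ExistingGateway::Reuse
    }
}
```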
force-pushed from e37ba99 to a27aaf4
drew added a commit that referenced this pull request on Mar 14, 2026
PR #281 removed the shared openshell-cluster Docker network in favor of the default bridge. This restores custom bridge networking but makes each gateway use its own isolated network named openshell-cluster-{name}, matching the existing container/volume naming convention.

Changes:
- Add network_name() to constants.rs for per-gateway network naming
- Add ensure_network() with retry/backoff and force_remove_network(), parameterized by network name instead of a global constant
- Attach containers to their per-gateway network via network_mode
- Disconnect and remove the network during gateway destroy
- Wire ensure_network() into the deploy flow before ensure_volume()
- Update architecture docs to reflect per-gateway network isolation
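A sketch of `network_name()`, with the exact format inferred from the `openshell-cluster-{name}` pattern in the commit message:

```rust
// Per-gateway network name, mirroring the container/volume convention:
// gateway "dev" gets the isolated bridge network "openshell-cluster-dev".
pub fn network_name(gateway: &str) -> String {
    format!("openshell-cluster-{gateway}")
}
```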
drew added a commit that referenced this pull request on Mar 16, 2026
Summary
- Add a `HEALTHCHECK_MISSING_SUPERVISOR` check to `cluster-healthcheck.sh` that verifies `/opt/openshell/bin/openshell-sandbox` exists and is executable
- Add early detection in the bootstrap polling loop (`runtime.rs`) so `gateway start` fails fast with actionable guidance instead of timing out after 6 minutes
- Add structured error diagnosis in `errors.rs` with recovery steps (rebuild image, recreate gateway)
- Update the `debug-openshell-cluster` skill with the new failure pattern and diagnostic command

Context
When the published cluster image is missing the sandbox supervisor binary (e.g. built before the `supervisor-builder` stage was added), the gateway reports healthy but every sandbox pod crashes immediately with 'no such file or directory'. This is a confusing failure because the gateway health check passes,
`openshell status` shows the server is up, but no sandboxes can start. The fix ensures the health check catches this condition and the bootstrap surfaces a clear error with recovery instructions.

Test Plan
- `cargo check -p openshell-bootstrap` passes
- `openshell-bootstrap` tests pass
- `mise run pre-commit` passes